Trying Out HashiCorp's New Products Nomad and Otto

2015.09.29

Hi, this is Otaki.
HashiConf 2015, the conference hosted by HashiCorp of Vagrant and Terraform fame, kicked off in Portland in the early hours of this morning, and two new products, Nomad and Otto, were announced there. Both were released right after the announcement and can already be tried out, so here is a report on running their samples.

Nomad

As its tagline "Easily deploy applications at any scale" suggests, Nomad is a scheduler that deploys applications. You install an agent beforehand on each host that will run applications, define the applications as jobs in a configuration file (*.nomad), and Nomad executes the jobs according to that file.

There are already plenty of deployment tools and schedulers, but Nomad's selling point seems to be that it targets long-lived services rather than one-off jobs and computes optimal placement across a cluster of hosts. The lineup on its comparison page (Amazon ECS, Kubernetes, Mesos with Aurora, and so on) gives a good sense of this.

The tasks (applications) currently supported are Docker containers, standalone Java web applications, command processes, and QEMU (KVM) virtual machines.
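Which runtime executes a task is selected with the task's driver attribute. Here is a minimal sketch using the command-process (exec) driver; the command value is just an illustration:

task "hello" {
	# Run a plain command process instead of a Docker container
	driver = "exec"

	config {
		command = "/bin/date"
	}
}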

Setup

Like many other HashiCorp products, Nomad is a single binary written in Go. To install it, download the binary for your OS from the download page and place it in a directory on your PATH, and you're done. This time we'll run Nomad on a Linux virtual machine under Vagrant, using the Vagrantfile from the Getting Started page. If you'd like to follow along, install Vagrant on your local machine first.
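If you'd rather install the binary directly instead of using the Vagrant setup below, it's a minimal sketch like this (the URL and version here are assumptions; pick the right file from the download page):

$ wget https://releases.hashicorp.com/nomad/0.1.0/nomad_0.1.0_linux_amd64.zip
$ unzip nomad_0.1.0_linux_amd64.zip
$ sudo mv nomad /usr/local/bin/
$ nomad version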

  • Verified environment
    • OS : OS X Yosemite
    • Vagrant : 1.7.4

Running vagrant up creates and boots the virtual machine, and its bootstrap provisioning automatically installs Consul (for forming the cluster) and the Nomad agent. Once the VM is configured, SSH in and check the nomad command.

$ wget https://raw.githubusercontent.com/hashicorp/nomad/master/demo/vagrant/Vagrantfile
  :(snip)
$ vagrant up
  :(snip)
==> default: 2015-09-28 18:26:26 (2.10 MB/s) - ‘nomad’ saved [16390032/16390032]
==> default: Installing Nomad...
$ vagrant ssh
Welcome to Ubuntu 14.04.1 LTS (GNU/Linux 3.13.0-32-generic x86_64)

 * Documentation:  https://help.ubuntu.com/
Welcome to your Vagrant-built virtual machine.
Last login: Fri Jan 30 07:21:35 2015 from 10.0.2.2
vagrant@nomad:~$ nomad
usage: nomad [--version] [--help] <command> [<args>]

Available commands are:
    agent                 Runs a Nomad agent
    agent-info            Display status information about the local agent
    alloc-status          Display allocation status information and metadata
    client-config         View or modify client configuration details
    eval-monitor          Monitor an evaluation interactively
    init                  Create an example job file
    node-drain            Toggle drain mode on a given node
    node-status           Display status information about nodes
    run                   Run a new job or update an existing job
    server-force-leave    Force a server into the 'left' state
    server-join           Join server nodes together
    server-members        Display a list of known servers and their status
    status                Display status information about jobs
    stop                  Stop a running job
    validate              Checks if a given job specification is valid
    version               Prints the Nomad version
$

The nomad command is properly installed. Next, start the agent with nomad agent. For production use you'd want a cluster spanning multiple machines, but for this trial we pass the -dev option and run it standalone.

vagrant@nomad:~$ sudo nomad agent -dev
==> Starting Nomad agent...
2015/09/28 18:30:51 [ERR] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
==> Nomad agent configuration:

                 Atlas: <disabled>
                Client: true
             Log Level: DEBUG
                Region: global (DC: dc1)
                Server: true

==> Nomad agent started! Log data will stream in below:

    2015/09/28 18:30:49 [INFO] raft: Node at 127.0.0.1:4647 [Follower] entering Follower state
    2015/09/28 18:30:49 [INFO] serf: EventMemberJoin: nomad.global 127.0.0.1
    2015/09/28 18:30:49 [INFO] nomad: starting 4 scheduling worker(s) for [service batch _core]
    2015/09/28 18:30:49 [INFO] client: using alloc directory /tmp/NomadClient036176414
    2015/09/28 18:30:49 [INFO] nomad: adding server nomad.global (Addr: 127.0.0.1:4647) (DC: dc1)
    2015/09/28 18:30:49 [WARN] fingerprint.network: Ethtool not found, checking /sys/net speed file
    2015/09/28 18:30:50 [WARN] raft: Heartbeat timeout reached, starting election
    2015/09/28 18:30:50 [INFO] raft: Node at 127.0.0.1:4647 [Candidate] entering Candidate state
    2015/09/28 18:30:50 [DEBUG] raft: Votes needed: 1
    2015/09/28 18:30:50 [DEBUG] raft: Vote granted. Tally: 1
    2015/09/28 18:30:50 [INFO] raft: Election won. Tally: 1
    2015/09/28 18:30:50 [INFO] raft: Node at 127.0.0.1:4647 [Leader] entering Leader state
    2015/09/28 18:30:50 [INFO] raft: Disabling EnableSingleNode (bootstrap)
    2015/09/28 18:30:50 [DEBUG] raft: Node 127.0.0.1:4647 updated peer set (2): [127.0.0.1:4647]
    2015/09/28 18:30:50 [INFO] nomad: cluster leadership acquired
    2015/09/28 18:30:51 [DEBUG] client: applied fingerprints [arch cpu host memory storage network]
    2015/09/28 18:30:51 [DEBUG] client: available drivers 
    2015/09/28 18:30:51 [DEBUG] client: node registration complete
    2015/09/28 18:30:51 [DEBUG] client: updated allocations at index 1 (0 allocs)
    2015/09/28 18:30:51 [DEBUG] client: allocs: (added 0) (removed 0) (updated 0) (ignore 0)
    2015/09/28 18:30:51 [DEBUG] client: state updated to ready

With that, the Nomad agent is ready.
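As a quick sanity check, the inspection commands from the help output above can be pointed at the running agent (their output is not shown here):

vagrant@nomad:~$ nomad server-members
vagrant@nomad:~$ nomad node-status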

Creating and Running a Sample Job

Leaving that terminal open, open another one, SSH into the Vagrant VM again, and create a sample job file with nomad init.

$ vagrant ssh
Welcome to Ubuntu 14.04.1 LTS (GNU/Linux 3.13.0-32-generic x86_64)
  :(snip)
vagrant@nomad:~$ nomad init
Example job file written to example.nomad
vagrant@nomad:~$

The sample configuration contains quite a bit, but you can read off that it runs a single Docker container from the redis:latest Docker image.

# There can only be a single job definition per file.
# Create a job with ID and Name 'example'
job "example" {
	# Run the job in the global region, which is the default.
	# region = "global"

	# Specify the datacenters within the region this job can run in.
	datacenters = ["dc1"]

	# Service type jobs optimize for long-lived services. This is
	# the default but we can change to batch for short-lived tasks.
	# type = "service"

	# Priority controls our access to resources and scheduling priority.
	# This can be 1 to 100, inclusively, and defaults to 50.
	# priority = 50

	# Restrict our job to only linux. We can specify multiple
	# constraints as needed.
	constraint {
		attribute = "$attr.kernel.name"
		value = "linux"
	}

	# Configure the job to do rolling updates
	update {
		# Stagger updates every 10 seconds
		stagger = "10s"

		# Update a single task at a time
		max_parallel = 1
	}

	# Create a 'cache' group. Each task in the group will be
	# scheduled onto the same machine.
	group "cache" {
		# Control the number of instances of this groups.
		# Defaults to 1
		# count = 1

		# Define a task to run
		task "redis" {
			# Use Docker to run the task.
			driver = "docker"

			# Configure Docker driver with the image
			config {
				image = "redis:latest"
			}

			# We must specify the resources required for
			# this task to ensure it runs on a machine with
			# enough capacity.
			resources {
				cpu = 500 # 500 Mhz
				memory = 256 # 256MB
				network {
					mbits = 10
					dynamic_ports = ["redis"]
				}
			}
		}
	}
}

Now run it with nomad run <job file>. After that, nomad status <job name> shows the job's state. The lines under "Allocations" are long, but the "Desired" column (the desired state) reads run and the "Status" column reads pending, so we can tell the application (here, a Docker container) is starting up.

vagrant@nomad:~$ nomad run example.nomad
==> Monitoring evaluation "21e50e36-e1c4-6a40-970d-a56d50d15b41"
    Evaluation triggered by job "example"
    Allocation "7e041ae9-b1da-f5a4-ec09-5aeb44de7171" created: node "f840a518-fe7d-b263-e721-8443807c85f6", group "cache"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "21e50e36-e1c4-6a40-970d-a56d50d15b41" finished with status "complete"
vagrant@nomad:~$ nomad status example
ID          = example
Name        = example
Type        = service
Priority    = 50
Datacenters = dc1
Status      = <none>

==> Evaluations
ID                                    Priority  TriggeredBy   Status
21e50e36-e1c4-6a40-970d-a56d50d15b41  50        job-register  complete

==> Allocations
ID                                    EvalID                                NodeID                                TaskGroup  Desired  Status
7e041ae9-b1da-f5a4-ec09-5aeb44de7171  21e50e36-e1c4-6a40-970d-a56d50d15b41  f840a518-fe7d-b263-e721-8443807c85f6  cache      run      pending
vagrant@nomad:~$

The Getting Started page goes on to show changing the number of running containers and performing a rolling update by changing the image tag.
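For example, raising count in the cache group and pinning the image to a specific tag (the values below are an illustration), then running nomad run example.nomad again, rolls the change out according to the update block above (one task at a time, staggered by 10 seconds):

	group "cache" {
		count = 3

		task "redis" {
			driver = "docker"

			config {
				image = "redis:2.8"
			}

			# resources omitted; same as in the sample file
		}
	}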

Otto

Otto is positioned as the successor to Vagrant, HashiCorp's development VM management tool. Where Vagrant was mainly about building development environments, Otto's concept also takes deployment to production into view, covering every environment a developer needs with single otto * commands.
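The whole lifecycle we'll walk through below boils down to this sequence of commands:

$ otto compile    # detect the app and generate configuration
$ otto dev        # build a local development environment
$ otto infra      # build the base infrastructure (via Terraform)
$ otto build      # build the deployable artifact (via Packer)
$ otto deploy     # deploy the application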

Setup

  • Verified environment
    • OS : OS X Yosemite
    • Vagrant : 1.7.4

Like Nomad, Otto is installed just by placing the binary file. I downloaded the file for my OS from the download page, extracted it, and put it under ~/bin.

$ otto
usage: otto [--version] [--help] <command> [<args>]

Available commands are:
    build      Build the deployable artifact for the app
    compile    Prepares your project for being run.
    deploy     Deploy the application
    dev        Start and manage a development environment
    infra      Builds the infrastructure for the Appfile
    status     Status of the stages of this application
    version    Prints the Otto version
$

This time we'll use the sample Rack application from the Getting Started page.

$ git clone https://github.com/hashicorp/otto-getting-started.git
Cloning into 'otto-getting-started'...
remote: Counting objects: 23, done.
remote: Total 23 (delta 0), reused 0 (delta 0), pack-reused 23
Unpacking objects: 100% (23/23), done.
Checking connectivity... done.
$ cd otto-getting-started
$ ls
Gemfile       Gemfile.lock  README.md     app.rb        config.ru     views/
$

Otto's Automatic Configuration

Otto can automatically detect the type of application, and otto compile generates Otto's configuration files based on that detection, so let's run it. In the output you can see the (ruby) marker, showing the app was identified as a Ruby application.

$ otto compile
==> Loading Appfile...
==> No Appfile found! Detecting project information...
    No Appfile was found. If there is no Appfile, Otto will do its best
    to detect the type of application this is and set reasonable defaults.
    This is a good way to get started with Otto, but over time we recommend
    writing a real Appfile since this will allow more complex customizations,
    the ability to reference dependencies, versioning, and more.
==> Fetching all Appfile dependencies...
==> Compiling...
    Application:    otto-getting-started (ruby)
    Project:        otto-getting-started
    Infrastructure: aws (simple)

    Compiling infra...
    Compiling foundation: consul
==> Compiling main application...
==> Compilation success!
    This means that Otto is now ready to start a development environment,
    deploy this application, build the supporting infastructure, and
    more. See the help for more information.

    Supporting files to enable Otto to manage your application from
    development to deployment have been placed in the output directory.
    These files can be manually inspected to determine what Otto will do.
$

At a glance it's not obvious what happened, but the files Otto generated have been written out under .otto/ in the application directory. Subsequent otto commands operate based on these files.

$ ls -a
./            .git/         .ottoid       Gemfile.lock  app.rb        views/
../           .otto/        Gemfile       README.md     config.ru
$ ls .otto
appfile/  compiled/ data/
$

This is a simple Ruby application, but you'll likely need to tweak the runtime environment sooner or later, and there are ways to customize this automatic configuration.
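Customization happens through an Appfile at the project root. A minimal sketch matching the compile output above might look like this (the field values are assumptions based on that output):

application {
  name = "otto-getting-started"
  type = "ruby"
}

project {
  name = "otto-getting-started"
  infrastructure = "otto-getting-started"
}

infrastructure "otto-getting-started" {
  type = "aws"
  flavor = "simple"
}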

Building the Development Environment

Now let's build a development environment on the local machine from the automatic configuration by running otto dev. The current version of Otto creates a Vagrant virtual machine internally and deploys the application into it.

$ otto dev
==> Creating local development environment with Vagrant if it doesn't exist...
    Raw Vagrant output will begin streaming in below. Otto does
    not create this output. It is mirrored directly from Vagrant
    while the development environment is being created.

Bringing machine 'default' up with 'virtualbox' provider...
==> default: Importing base box 'hashicorp/precise64'...
  :(snip)
==> default: Mounting shared folders...
    default: /vagrant => /Users/ryuta/Desktop/otto-getting-started
    default: /otto/foundation-1 => /Users/ryuta/Desktop/otto-getting-started/.otto/compiled/app/foundation-consul/app-dev
==> default: Running provisioner: shell...
    default: Running: inline script
==> default: stdin: is not a tty
==> default: [otto] Installing Consul...
==> default: [otto] Installing dnsmasq for Consul...
==> default: [otto] Configuring consul service: otto-getting-started
==> default: Running provisioner: shell...
    default: Running: inline script
==> default: stdin: is not a tty
==> default: [otto] Adding apt repositories and updating...
==> default: [otto] Installing Ruby 2.2 and supporting packages...
==> default: [otto] Installing Bundler...
==> default: [otto] Configuring Git to use SSH instead of HTTP so we can agent-forward private repo auth...

==> Caching SSH credentials from Vagrant...
==> Development environment successfully created!
    IP address: 172.16.1.83

    A development environment has been created for writing a generic
    Ruby-based app.

    Ruby is pre-installed. To work on your project, edit files locally on your
    own machine. The file changes will be synced to the development environment.

    When you're ready to build your project, run 'otto dev ssh' to enter
    the development environment. You'll be placed directly into the working
    directory where you can run 'bundle' and 'ruby' as you normally would.

    You can access any running web application using the IP above.
$

That worked. Next, SSH in (otto dev ssh) and start the Ruby application (bundle && rackup --host 0.0.0.0).

$ otto dev ssh
Welcome to Ubuntu 12.04 LTS (GNU/Linux 3.2.0-23-generic x86_64)

 * Documentation:  https://help.ubuntu.com/
New release '14.04.3 LTS' available.
Run 'do-release-upgrade' to upgrade to it.

Welcome to your Vagrant-built virtual machine.
Last login: Fri Sep 14 06:23:18 2012 from 10.0.2.2
vagrant@precise64:/vagrant$ bundle && rackup --host 0.0.0.0
Fetching gem metadata from https://rubygems.org/..........
Fetching version metadata from https://rubygems.org/..
Installing rack 1.6.4
Installing rack-protection 1.5.3
Installing tilt 2.0.1
Installing sinatra 1.4.6
Using bundler 1.10.6
Bundle complete! 1 Gemfile dependency, 5 gems now installed.
Use `bundle show [gemname]` to see where a bundled gem is installed.
[2015-09-28 19:22:22] INFO  WEBrick 1.3.1
[2015-09-28 19:22:22] INFO  ruby 2.2.3 (2015-08-18) [x86_64-linux-gnu]
[2015-09-28 19:22:22] INFO  WEBrick::HTTPServer#start: pid=7070 port=9292

Open a browser and access <the IP address shown in the output>:9292...

The Ruby application is working! After confirming, press Ctrl + C in the terminal to stop the application, then run exit to close the SSH session.

Building the Production Infrastructure

Next we build the base of the production environment. At present, two flavors of AWS VPC configuration built with Terraform are supported. Here we keep the automatic configuration and create the simple flavor by running otto infra. It asks for your AWS API keys, so enter them *1

$ otto infra
==> Detecting infrastructure credentials for: otto-getting-started (aws)
    Cached and encrypted infrastructure credentials found.
    Otto will now ask you for the password to decrypt these
    credentials.

AWS Access Key
  AWS access key used for API calls.

  Enter a value: AKIAXXXXXXXXXXXXXXXXX
  
AWS Secret Key
  AWS secret key used for API calls.

  Enter a value: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

SSH Public Key Path
  Path to an SSH public key that will be granted access to EC2 instances

  Default: ~/.ssh/id_rsa.pub
  Enter a value:

Password for Encrypting Credentials
  This password will be used to encrypt and save the credentials so they
  don't need to be repeated multiple times.

  Enter a value:
==> Building main infrastructure...
==> Executing Terraform to manage infrastructure...
    Raw Terraform output will begin streaming in below. Otto
    does not create this output. It is mirrored directly from
    Terraform while the infrastructure is being created.

    Terraform may ask for input. For infrastructure provider
    credentials, be sure to enter the same credentials
    consistently within the same Otto environment.

aws_vpc.main: Refreshing state... (ID: vpc-0de83969)
aws_subnet.public: Refreshing state... (ID: subnet-dbb977e6)
aws_internet_gateway.public: Refreshing state... (ID: igw-ff2e119a)
aws_key_pair.main: Refreshing state... (ID: otto-0de83969)
aws_route_table.public: Refreshing state... (ID: rtb-b18006d5)
aws_route_table_association.public: Refreshing state... (ID: rtbassoc-22f3a746)

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

  infra_id      = 0de83969
  key_name      = otto-0de83969
  region        = us-east-1
  subnet_public = subnet-dbb977e6
  vpc_cidr      = 10.0.0.0/16
  vpc_id        = vpc-0de83969

==> Terraform execution complete. Saving results...
==> Building infrastructure for foundation: consul
Get: file:///Users/ryuta/Desktop/otto-getting-started/.otto/compiled/foundation-consul/deploy/module-aws-simple

==> Terraform execution complete. Saving results...
module.consul-1.aws_security_group.consul: Refreshing state... (ID: sg-9ff415f9)
module.consul-1.aws_instance.consul: Creating...
  ami:                               "" => "ami-7f6a1f1a"
  availability_zone:                 "" => "<computed>"
  ebs_block_device.#:                "" => "<computed>"
  ephemeral_block_device.#:          "" => "<computed>"
  instance_type:                     "" => "t2.micro"
  key_name:                          "" => "otto-0de83969"
  placement_group:                   "" => "<computed>"
  private_dns:                       "" => "<computed>"
  private_ip:                        "" => "10.0.2.6"
  public_dns:                        "" => "<computed>"
  public_ip:                         "" => "<computed>"
  root_block_device.#:               "" => "<computed>"
  security_groups.#:                 "" => "<computed>"
  source_dest_check:                 "" => "1"
  subnet_id:                         "" => "subnet-dbb977e6"
  tags.#:                            "" => "1"
  tags.Name:                         "" => "consul 1"
  tenancy:                           "" => "<computed>"
  vpc_security_group_ids.#:          "" => "1"
  vpc_security_group_ids.2998911229: "" => "sg-9ff415f9"
module.consul-1.aws_instance.consul: Provisioning with 'file'...
module.consul-1.aws_instance.consul: Provisioning with 'remote-exec'...
module.consul-1.aws_instance.consul (remote-exec): Connecting to remote host via SSH...
module.consul-1.aws_instance.consul (remote-exec):   Host: 52.3.246.15
module.consul-1.aws_instance.consul (remote-exec):   User: ubuntu
module.consul-1.aws_instance.consul (remote-exec):   Password: false
module.consul-1.aws_instance.consul (remote-exec):   Private key: false
module.consul-1.aws_instance.consul (remote-exec):   SSH Agent: true
module.consul-1.aws_instance.consul (remote-exec): Connected!
module.consul-1.aws_instance.consul (remote-exec): consul stop/waiting
module.consul-1.aws_instance.consul (remote-exec): consul start/running, process 1333
module.consul-1.aws_instance.consul: Creation complete

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: /var/folders/mq/1x6h906n34n1r0ttkfgp724h0000gq/T/otto-tf513759639/state

Outputs:

  consul_address = 10.0.2.6

==> Terraform execution complete. Saving results...
==> Infrastructure successfully created!
    The infrastructure necessary to deploy this application
    is now available. You can now deploy using `otto deploy`.
$

The base layer on AWS has been created.

Building the Production Artifact

Otto generates the AMI image to deploy to production with Packer, via the otto build command. You're asked for the password first so the API keys entered during otto infra can be reused.

$ otto build
==> Detecting infrastructure credentials for: otto-getting-started (aws)
    Cached and encrypted infrastructure credentials found.
    Otto will now ask you for the password to decrypt these
    credentials.

Encrypted Credentials Password
  Infrastructure credentials are required for this operation. Otto found
  saved credentials that are password protected. Please enter the password
  to decrypt these credentials. You may also just hit <enter> and leave
  the password blank to force Otto to ask for the credentials again.

  Enter a value:
Would you like Otto to install Packer?
  Otto requires packer to be installed, but it couldn't be found on your
  system. Otto can install the latest version of packer for you. Otto will
  install this into its own private data directory so it doesn't conflict
  with anything else on your system. Would you like Otto to install packer
  for you? Alternatively, you may install this on your own.

  If you answer yes, Otto will install packer version 0.8.6.

  Please enter 'yes' to continue. Any other value will exit.

  Enter a value: yes

==> Downloading packer v0.8.6...
    URL: https://dl.bintray.com/mitchellh/packer/packer_0.8.6_darwin_amd64.zip

    132 MB/132 MB
==> Unzipping downloaded package...
==> packer installed successfully!
==> Querying infrastructure data for build...
==> Building deployment archive...
==> Building deployment artifact with Packer...
    Raw Packer output will begin streaming in below. Otto
    does not create this output. It is mirrored directly from
    Packer while the build is being run.

otto output will be in this color.

==> otto: Prevalidating AMI Name...
==> otto: Inspecting the source AMI...
==> otto: Creating temporary keypair: packer 56099801-cfd8-754d-4cbd-839edc4b61ed
==> otto: Creating temporary security group for this instance...
==> otto: Authorizing access to port 22 the temporary security group...
==> otto: Launching a source AWS instance...
    otto: Instance ID: i-c2e5e561
==> otto: Waiting for instance (i-c2e5e561) to become ready...
==> otto: Waiting for SSH to become available...
==> otto: Connected to SSH!
==> otto: Provisioning with shell script: /var/folders/mq/1x6h906n34n1r0ttkfgp724h0000gq/T/packer-shell478998183
==> otto: Uploading /Users/ryuta/Desktop/otto-getting-started/.otto/compiled/app/foundation-consul/app-build/ => /tmp/otto/foundation-1
==> otto: Provisioning with shell script: /var/folders/mq/1x6h906n34n1r0ttkfgp724h0000gq/T/packer-shell166143158
    otto: [otto] Installing Consul...
    otto: [otto] Installing dnsmasq for Consul...
    otto: [otto] Configuring consul service: otto-getting-started
==> otto: Uploading /var/folders/mq/1x6h906n34n1r0ttkfgp724h0000gq/T/otto-slug-993915578 => /tmp/otto-app.tgz
==> otto: Provisioning with shell script: build-ruby.sh
    otto: [otto] Waiting for cloud-config to complete...
    otto: [otto] Adding apt repositories and updating...
    otto: [otto] Installing Ruby, Passenger, Nginx, and other packages...
    otto: [otto] Installing Bundler...
    otto: [otto] Extracting app...
    otto: [otto] Adding application user...
    otto: [otto] Setting permissions...
    otto: [otto] Configuring nginx...
    otto: [otto] Bundle installing the app...
    otto: Fetching gem metadata from https://rubygems.org/..........
    otto: Fetching version metadata from https://rubygems.org/..
    otto: Installing rack 1.6.4
    otto: Installing rack-protection 1.5.3
    otto: Installing tilt 2.0.1
    otto: Installing sinatra 1.4.6
    otto: Using bundler 1.10.6
    otto: Bundle complete! 1 Gemfile dependency, 5 gems now installed.
    otto: Gems in the groups development and test were not installed.
    otto: Bundled gems are installed into ./vendor/bundle.
    otto: [otto] ...done!
==> otto: Stopping the source instance...
==> otto: Waiting for the instance to stop...
==> otto: Creating the AMI: otto-getting-started 1443469313
    otto: AMI: ami-59e3993c
==> otto: Waiting for AMI to become ready...
==> otto: Terminating the source AWS instance...
==> otto: Cleaning up any extra volumes...
==> otto: No volumes to clean up, skipping
==> otto: Deleting temporary security group...
==> otto: Deleting temporary keypair...
Build 'otto' finished.

==> Builds finished. The artifacts of successful builds are:
--> otto: AMIs were created:

us-east-1: ami-59e3993c
==> Storing build data in directory...
==> Build success!
    The build was completed successfully and stored within
    the directory service, meaning other members of your team
    don't need to rebuild this same version and can deploy it
    immediately.

The production AMI is ready.

Deploying to Production

Finally, run otto deploy, which creates the production machine and deploys the app.

$ otto deploy
==> Detecting infrastructure credentials for: otto-getting-started (aws)
    Cached and encrypted infrastructure credentials found.
    Otto will now ask you for the password to decrypt these
    credentials.

Encrypted Credentials Password
  Infrastructure credentials are required for this operation. Otto found
  saved credentials that are password protected. Please enter the password
  to decrypt these credentials. You may also just hit <enter> and leave
  the password blank to force Otto to ask for the credentials again.

  Enter a value:
aws_security_group.app: Creating...
  description:                         "" => "Managed by Terraform"
  egress.#:                            "" => "1"
  egress.482069346.cidr_blocks.#:      "" => "1"
  egress.482069346.cidr_blocks.0:      "" => "0.0.0.0/0"
  egress.482069346.from_port:          "" => "0"
  egress.482069346.protocol:           "" => "-1"
  egress.482069346.security_groups.#:  "" => "0"
  egress.482069346.self:               "" => "0"
  egress.482069346.to_port:            "" => "0"
  ingress.#:                           "" => "1"
  ingress.482069346.cidr_blocks.#:     "" => "1"
  ingress.482069346.cidr_blocks.0:     "" => "0.0.0.0/0"
  ingress.482069346.from_port:         "" => "0"
  ingress.482069346.protocol:          "" => "-1"
  ingress.482069346.security_groups.#: "" => "0"
  ingress.482069346.self:              "" => "0"
  ingress.482069346.to_port:           "" => "0"
  name:                                "" => "otto-getting-started-0de83969"
  owner_id:                            "" => "<computed>"
  vpc_id:                              "" => "vpc-0de83969"
aws_security_group.app: Creation complete
aws_instance.app: Creating...
  ami:                               "" => "ami-59e3993c"
  availability_zone:                 "" => "<computed>"
  ebs_block_device.#:                "" => "<computed>"
  ephemeral_block_device.#:          "" => "<computed>"
  instance_type:                     "" => "t2.micro"
  key_name:                          "" => "otto-0de83969"
  placement_group:                   "" => "<computed>"
  private_dns:                       "" => "<computed>"
  private_ip:                        "" => "<computed>"
  public_dns:                        "" => "<computed>"
  public_ip:                         "" => "<computed>"
  root_block_device.#:               "" => "<computed>"
  security_groups.#:                 "" => "<computed>"
  source_dest_check:                 "" => "1"
  subnet_id:                         "" => "subnet-dbb977e6"
  tags.#:                            "" => "1"
  tags.Name:                         "" => "otto-getting-started"
  tenancy:                           "" => "<computed>"
  vpc_security_group_ids.#:          "" => "1"
  vpc_security_group_ids.3309454720: "" => "sg-29c1204f"
aws_instance.app: Creation complete

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: /var/folders/mq/1x6h906n34n1r0ttkfgp724h0000gq/T/otto-tf068637748/state

Outputs:

  url = http://ec2-XX-XX-XX-XX.compute-1.amazonaws.com/
$

Access the URL shown at the end, and the same Ruby application screen as in the development environment appears!
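Before cleaning up, the otto status command from the help output earlier reports where each stage of the application stands (its output is not shown here):

$ otto status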

Cleanup

Otto can also tear down the environments after use, all at once. Each of the following commands deletes all the resources created at the corresponding stage:

$ otto deploy destroy
$ otto infra destroy
$ otto dev destroy

Summary

Although these were just the samples, this was a look at Nomad and Otto in action. Both are fresh off the press and still light on features, but speedy development is one of HashiCorp's trademarks, so let's look forward to where these two new products go!

Aside

Incidentally, the first time I tried Otto I ran into an error saying the AMI could not be found, and ended up at the following issue.

It turned out Paul had simply forgotten to make the AMI public, a charming little slip befitting a product right after release.

Footnotes

  1. The password set at the end is used to encrypt and keep the API keys you entered once; you enter it again later when they are loaded for reuse
